Results 1-20 of 22
1.
Data Brief ; 54: 110303, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38559821

ABSTRACT

WorkStress3D is a comprehensive collection of multimodal data for the study of stress in the workplace. The dataset contains biosignals, facial expressions, and speech signals, making it a valuable resource for stress analysis and related studies. Because the data were collected in actual workplace environments, the dataset has high ecological validity. The biosignal data include measurements of electrodermal activity, blood volume pulse, and skin temperature, among others. Facial expressions were captured with high-resolution video recordings, allowing a comprehensive analysis of facial cues associated with stress, and speech signals were recorded to capture vocal characteristics indicative of stress. The dataset contains samples from both stress-free and stressful work situations, providing a balanced representation of various stress levels. It is accompanied by extensive metadata and annotations that facilitate in-depth analysis and interpretation. WorkStress3D is a valuable resource for developing and evaluating stress detection models, examining the impact of work environments on stress levels, and exploring the potential of multimodal data fusion for stress analysis.
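
As a sketch of how such a multimodal dataset might be consumed, the snippet below shows feature-level fusion: per-modality summaries are concatenated into a single vector before classification. The extractor names, window sizes, and feature choices are illustrative assumptions, not taken from the dataset itself.

```python
import numpy as np

# Hypothetical per-modality feature extractors; the real dataset's field
# names and sampling rates are assumptions, not taken from the paper.
def biosignal_features(eda, temp):
    """Summary statistics over an EDA and skin-temperature window."""
    return np.array([eda.mean(), eda.std(), temp.mean(), temp.std()])

def fuse(*feature_vectors):
    """Feature-level fusion: concatenate modality features into one vector."""
    return np.concatenate(feature_vectors)

rng = np.random.default_rng(0)
eda = rng.normal(0.4, 0.1, 256)    # microsiemens, one analysis window
temp = rng.normal(33.0, 0.2, 256)  # degrees Celsius
face = rng.random(8)               # e.g. facial action-unit intensities
speech = rng.random(13)            # e.g. mean MFCC coefficients

x = fuse(biosignal_features(eda, temp), face, speech)
print(x.shape)  # (25,)
```

The fused vector can then be fed to any standard classifier; richer fusion schemes (decision-level or model-level) are equally possible.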

2.
Data Brief ; 53: 110235, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38533115

ABSTRACT

In the context of neuromarketing, sales, and branding, the investigation of consumer decision-making processes presents complex and intriguing challenges, and considering multicultural influences and societal conditions from a global perspective further enriches this multifaceted field. The application of neuroscience tools and techniques to international marketing and consumer behavior is an emerging interdisciplinary field that seeks to understand the cognitive processes, reactions, and selection mechanisms of consumers within the context of branding and sales. The NeuroBioSense dataset was prepared to analyze and classify consumer responses. It includes physiological signals, facial images of the participants while watching the advertisements, and demographic information. The primary objective of the data collection was to record and analyze the responses of human subjects during a carefully designed experiment consisting of three distinct phases, each featuring a different form of branding advertisement. Physiological signals were collected with the Empatica E4 wearable sensor device, using its non-invasive photoplethysmography (PPG), electrodermal activity (EDA), and body temperature sensors. A total of 58 participants, aged between 18 and 70, were divided into three groups: 18 participants watched advertisements for cosmetics, 20 for food, and 20 for cars. The emotion evaluation scale distinguishes seven emotion classes: joy, surprise, anger, disgust, sadness, fear, and neutral. This dataset will help researchers analyze consumer responses, develop emotion classification studies, and investigate the relationship between consumers, advertising, and neuromarketing methods.

3.
Data Brief ; 52: 109896, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38173979

ABSTRACT

Mental fatigue is a noteworthy phenomenon that can affect individuals across diverse professions and working routines. This paper provides a comprehensive dataset of physiological signals obtained from 23 participants during their professional work, together with questionnaires for analyzing mental fatigue. The questionnaires captured demographic information and Chalder Fatigue Scale scores indicating mental and physical fatigue. Both the physiological measurements and the Chalder Fatigue Scale were administered in two sessions, morning and evening. The dataset encompasses diverse physiological signals, including electroencephalogram (EEG), blood volume pulse (BVP), electrodermal activity (EDA), heart rate (HR), skin temperature (TEMP), and 3-axis accelerometer (ACC) data. The NeuroSky MindWave EEG device was used for brain signals, and the Empatica E4 smart wristband for the other signals. Measurements were carried out on individuals from four occupational groups: academics, technicians, computer engineers, and kitchen workers. Comprehensive metadata supplements the dataset, supporting inquiries into the neurophysiological correlates of mental fatigue, autonomic activity patterns, and the repercussions of cognitive burden on human performance in actual workplace settings. The availability of this dataset facilitates progress in mental fatigue research and lays the groundwork for customized fatigue evaluation techniques and interventions in diverse professional domains.

4.
Data Brief ; 49: 109297, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37346930

ABSTRACT

The effects of chronic stress on academic and professional achievement can be substantial. This relationship is highlighted through a dataset that includes questionnaires and physiological data: questionnaire data from 48 individuals, physiological data from 20 individuals recorded during sessions with a psychologist, and exam data from 8 individuals. The questionnaire data include demographic information and scores on the TOAD stress scale. Physiological data were captured using the Empatica E4, a wearable device that measured blood volume pulse, electrodermal activity, body temperature, interbeat intervals, heart rate, and 3-axis accelerometer data. These measurements were taken under both low- and high-stress conditions, during therapy sessions and an exam, respectively. By providing a large dataset of questionnaires and physiological data, this study helps researchers better understand the complex relationship between stress and achievement and enables them to develop innovative strategies for managing stress and enhancing academic and professional success.

5.
Multimed Tools Appl ; : 1-13, 2023 Mar 22.
Article in English | MEDLINE | ID: mdl-37362640

ABSTRACT

Eating is experienced as an emotional social activity in every culture, and several factors influence the emotions felt during food consumption. The emotion felt while eating has a significant impact on our lives and affects health conditions such as obesity; investigating it is a multidisciplinary problem ranging from neuroscience to anatomy. In this study, we focus on evaluating the emotional experience of different participants during eating activities and aim to analyze it automatically using deep learning models. We propose a facial expression-based prediction model to eliminate user bias in questionnaire-based assessment systems and to minimize false entries to the system. We measured the neural, behavioral, and physical manifestations of emotions with a mobile app and recognized emotional experiences from facial expressions. To test whether factors other than the food itself could affect a person's mood, we used three different situations: participants watched videos, listened to music, or did nothing while eating. In this way, we found that not only food but also external factors play a role in emotional change. We employed three Convolutional Neural Network (CNN) architectures, a fine-tuned VGG16, and Deepface to recognize emotional responses during eating. The experimental results demonstrate that the fine-tuned VGG16 provides remarkable results, with an overall accuracy of 77.68% for recognizing the four emotions. This system is an alternative to today's survey-based restaurant and food evaluation systems.
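
The CNN architectures and fine-tuned VGG16 used in this study are built from stacked convolution layers. As a minimal, purely illustrative sketch (not the paper's model), the core operation, a valid-mode 2D convolution implemented as cross-correlation in the way deep learning libraries do, can be written in a few lines of numpy:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge = np.array([[1.0, -1.0]])  # horizontal edge-detecting kernel
img = np.zeros((4, 4))
img[:, 2:] = 1.0                # step edge between columns 1 and 2
print(conv2d(img, edge))        # responds only at the step edge
```

A trained network learns many such kernels per layer; fine-tuning VGG16 means reusing its pretrained kernels and retraining only the upper layers for the four-emotion task.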

6.
Cluster Comput ; : 1-13, 2023 Jan 08.
Article in English | MEDLINE | ID: mdl-36643764

ABSTRACT

The Covid-19 pandemic caused uncertainty in many organizations; institutions gained experience with remote working and learned that high-quality distance education is a crucial component of higher education. The main concern in higher education is the impact of distance education on the quality of learning during such a pandemic. Although this type of education may be considered effective and beneficial at first glance, its effectiveness highly depends on factors such as the availability of online resources and individuals' financial situations. In this study, the effectiveness of e-learning during the Covid-19 pandemic is evaluated using posted tweets, sentiment analysis, and topic modeling techniques. More than 160,000 tweets addressing conditions related to the major change in the education system were gathered from the Twitter social network, and deep learning-based sentiment analysis models and topic models based on the Latent Dirichlet Allocation (LDA) algorithm were developed and analyzed. A long short-term memory (LSTM)-based sentiment analysis model using word2vec embeddings was used to evaluate the opinions of Twitter users during distance education, and a topic model using the LDA algorithm was built to identify the topics discussed on Twitter. The experiments demonstrate that the proposed model achieved an overall accuracy of 76%. Our findings also reveal negative effects of the Covid-19 pandemic on individuals: 54.5% of tweets were associated with negative emotions, a share that was comparatively low in the emotion reports of the YouGov survey and in gender-rescaled emotion scores on Twitter. In parallel, we discuss the impact of the pandemic on education and how users' emotions altered due to the catastrophic changes to the education system, based on the proposed machine learning models.
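
The paper's classifier is an LSTM over word2vec embeddings; as a much simpler, purely illustrative stand-in, the sketch below shows how per-tweet sentiment labels are aggregated into the kind of proportion reported above. The lexicon and tweets are invented for the example.

```python
# A toy lexicon baseline, NOT the paper's LSTM model: it only illustrates
# how tweet-level sentiment proportions (e.g. the 54.5% negative figure)
# are aggregated once each tweet has a predicted label.
NEGATIVE = {"bad", "boring", "unfair", "stressful", "hate"}
POSITIVE = {"good", "helpful", "flexible", "great", "love"}

def label(tweet):
    words = set(tweet.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweets = [
    "online classes are boring and stressful",
    "distance education is flexible and helpful",
    "exams moved online",
]
labels = [label(t) for t in tweets]
negative_share = labels.count("negative") / len(labels)
print(labels, negative_share)
```

A deep model replaces the lexicon with learned representations, but the aggregation into corpus-level proportions is the same.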

7.
Neural Comput Appl ; 35(7): 4957-4973, 2023.
Article in English | MEDLINE | ID: mdl-34393380

ABSTRACT

Phishing is an attack that imitates the official websites of corporations such as banks, e-commerce sites, financial institutions, and governmental institutions. Phishing websites aim to access and retrieve users' important information, such as personal identification, social security numbers, passwords, e-mail addresses, credit cards, and other account information. Several anti-phishing techniques have been developed to cope with the increasing number of phishing attacks. Machine learning, and particularly deep learning, algorithms are nowadays the most important techniques used to detect and prevent phishing attacks because of their strong learning abilities on massive datasets and their state-of-the-art results in many classification problems. Previously, two types of feature extraction techniques, character embedding-based and manual natural language processing (NLP) feature extraction, were used in isolation; because researchers did not consolidate these features, performance was not remarkable. Unlike previous works, our study presents an approach that combines both feature extraction techniques to fully exploit the available data. This paper proposes hybrid deep learning models based on long short-term memory and deep neural network algorithms for detecting phishing uniform resource locators (URLs) and evaluates their performance on phishing datasets. The proposed hybrid models utilize both character embeddings and NLP features, thereby simultaneously exploiting deep connections between characters and revealing NLP-based high-level connections. Experimental results show that the proposed models outperform the other phishing detection models in terms of accuracy.
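
A hedged sketch of the two feature views such hybrid models combine: a character-level encoding for the embedding branch and hand-crafted NLP features for the other branch. The alphabet, maximum length, and feature set below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-._/:?=&"
CHAR_INDEX = {c: i + 1 for i, c in enumerate(ALPHABET)}  # 0 = padding/unknown

def char_encode(url, max_len=40):
    """Character-level encoding fed to the embedding branch."""
    ids = [CHAR_INDEX.get(c, 0) for c in url.lower()[:max_len]]
    return np.array(ids + [0] * (max_len - len(ids)))

def nlp_features(url):
    """Hand-crafted URL features for the NLP branch (illustrative subset)."""
    return np.array([
        len(url),          # overall length
        url.count("."),    # subdomain depth proxy
        url.count("-"),    # hyphens, common in phishing hosts
        int("@" in url),   # '@' can obscure the real host
    ])

url = "http://secure-login.example.com@evil.test/update"
print(char_encode(url).shape, nlp_features(url))
```

In the hybrid architecture, the first vector would pass through an embedding plus LSTM branch and the second through a dense branch before the two are merged.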

8.
Educ Inf Technol (Dordr) ; 28(2): 1809-1831, 2023.
Article in English | MEDLINE | ID: mdl-35967829

ABSTRACT

Due to the increasing number of cyber incidents and an overwhelming skills shortage, the knowledge gap between cyber security education and industrial needs must be evaluated. The objective of this study is therefore to identify the knowledge gaps of cyber security graduates who join the cyber security workforce. We designed and performed an opinion survey using the Cyber Security Knowledge Areas (KAs) specified in the Cyber Security Body of Knowledge (CyBOK), which comprises 19 KAs. Our data were gathered from practitioners who work in cyber security organizations. The knowledge gap was measured and evaluated under the assumption that the ordinal survey responses could be treated as nominal data, refined by deploying a chi-squared test. The analyses demonstrate that there is a gap that can be used to enhance the quality of education. According to the final results, the three key KAs with the highest knowledge gap are Web and Mobile Security, Security Operations and Incident Management, while Cyber-Physical Systems (CPS), Software Lifecycles, and Vulnerabilities are the KAs with the largest difference in perceived importance between less and more experienced personnel. We discuss several suggestions for improving the cyber security curriculum in order to minimize these knowledge gaps. There is an expanding demand for executive cyber security personnel in industry, and high-quality university education is required to improve the qualifications of the upcoming workforce; the capability and capacity of the national cyber security workforce is crucial for nations and security organizations. A wide range of skills, namely technical, implementation, management, and soft skills, is required of new cyber security graduates. The use of each CyBOK KA in industry was measured against the extent of learning in university environments. As the first study conducted in this field, this research can pave the way for further work.
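
The chi-squared comparison can be sketched as follows; the contingency table of importance ratings is hypothetical, and a real analysis would also compute a p-value (for example with scipy.stats.chi2_contingency).

```python
import numpy as np

def chi_squared(observed):
    """Pearson chi-squared statistic for an r x c contingency table."""
    observed = np.asarray(observed, dtype=float)
    row = observed.sum(axis=1, keepdims=True)
    col = observed.sum(axis=0, keepdims=True)
    expected = row * col / observed.sum()
    return float(((observed - expected) ** 2 / expected).sum())

# Hypothetical counts: rows = less/more experienced practitioners,
# columns = importance ratings (low, medium, high) for one KA.
table = [[20, 15, 5],
         [8, 12, 20]]
stat = chi_squared(table)
print(round(stat, 2))  # 14.48
```

A large statistic relative to the chi-squared distribution with (r-1)(c-1) degrees of freedom indicates that the two experience groups rate the KA's importance differently.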

9.
Sensors (Basel) ; 22(20)2022 Oct 18.
Article in English | MEDLINE | ID: mdl-36298282

ABSTRACT

Smartphone adoption in society has progressed at a very high speed. Because Android runs on a vast variety of devices, much of the user base possesses an Android phone. Its popularity and flexibility have also made it a target of malware attacks, causing losses to users both financially and from a privacy perspective. New malware and variants emerge every day, making it a huge challenge to devise detection and prevention methodologies and tools. Research has spawned in various directions to yield effective malware detection mechanisms. Since malware can adopt different ways to attack and hide, accurate analysis is the key to detecting it. Like any ordinary mobile app, malware requires permissions to take actions and use device resources. There are 235 permissions in total that an Android app can request on a device. Malware takes advantage of this by requesting unnecessary permissions that enable it to take malicious actions. Since permissions are critical, it is important and challenging to identify whether an app is exploiting permissions and causing damage. The focus of this article is to analyze the studies that have been conducted on permission analysis for malware detection. With this perspective, a systematic literature review (SLR) was produced: several papers were retrieved and selected for detailed analysis, and the current challenges and different analyses are presented using the identified articles.
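
A sketch of the permission-based representation these studies typically use: each app becomes a binary vector over a permission vocabulary. The vocabulary below is a tiny illustrative subset of the 235 permissions, and the "benign"/"suspect" sets are invented.

```python
# Hypothetical manifest permissions mapped to a binary feature vector,
# the common representation in permission-based malware detection.
VOCAB = [
    "android.permission.INTERNET",
    "android.permission.READ_SMS",
    "android.permission.SEND_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.ACCESS_FINE_LOCATION",
]

def permission_vector(requested):
    """1 where the app requests the permission, 0 otherwise."""
    return [int(p in requested) for p in VOCAB]

benign = {"android.permission.INTERNET"}
suspect = {"android.permission.INTERNET",
           "android.permission.SEND_SMS",
           "android.permission.READ_CONTACTS"}
print(permission_vector(benign))   # [1, 0, 0, 0, 0]
print(permission_vector(suspect))  # [1, 0, 1, 1, 0]
```

Classifiers are then trained on such vectors, with unusual permission combinations acting as the discriminating signal.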


Subjects
Computer Security, Mobile Applications, Smartphone, Privacy
10.
Comput Biol Med ; 150: 106132, 2022 11.
Article in English | MEDLINE | ID: mdl-36195047

ABSTRACT

Phantom limb pain after amputation is a debilitating condition that negatively affects the activities of daily life and the quality of life of amputees. Most amputees are able to control the movement of the missing limb, which is called phantom limb movement. Recognition of these movements is crucial for both technology-based amputee rehabilitation and prosthetic control. The aim of the current study is to classify and recognize phantom movements at four different amputation levels of the upper and lower extremities. We utilized ensemble learning algorithms for the recognition and classification of phantom movements at the different amputation levels. In this context, sEMG signals were collected from 38 amputees and 25 healthy individuals to create the dataset. Studies processing sEMG signals in amputees are rather limited and generally address the classification of upper extremity and hand movements. Our study demonstrates that ensemble learning-based models result in higher accuracy in the detection of phantom movements, outperforming the SVM, decision tree, and kNN methods. The accuracy of movement pattern recognition was up to 96.33% in healthy individuals, whereas it was at most 79.16% in amputees.
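
A minimal sketch of the ensemble idea: combine base-classifier predictions by majority vote. The movement labels and base classifiers are invented for illustration; the paper's ensembles may use different combination rules such as boosting or weighted voting.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine base-classifier outputs per sample by majority vote,
    the simplest ensemble combination rule."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

# Hypothetical phantom-movement labels from three base classifiers
clf_a = ["flex", "extend", "flex", "rest"]
clf_b = ["flex", "flex",   "flex", "rest"]
clf_c = ["rest", "extend", "flex", "extend"]
print(majority_vote([clf_a, clf_b, clf_c]))
# ['flex', 'extend', 'flex', 'rest']
```

Aggregating several weak or diverse learners this way is what typically lets ensembles beat a single SVM, decision tree, or kNN model.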


Subjects
Phantom Limb, Quality of Life, Humans, Electromyography/methods, Hand, Upper Extremity, Movement, Machine Learning
11.
Sensors (Basel) ; 22(13)2022 Jun 23.
Article in English | MEDLINE | ID: mdl-35808230

ABSTRACT

Smartphones have enabled the widespread use of mobile applications. However, unrecognized defects in mobile applications can affect businesses due to a negative user experience. To avoid this, defects should be detected and removed before release. This study aims to develop a defect prediction model for mobile applications. We performed cross-project and within-project experiments and used deep learning algorithms, such as convolutional neural networks (CNN) and long short-term memory (LSTM), to develop a defect prediction model for Android-based applications. Based on our within-project experimental results, the CNN-based model provides the best performance for mobile application defect prediction, with an average area under the ROC curve (AUC) of 0.933. For cross-project mobile application defect prediction, there is still room for improvement when deep learning algorithms are preferred.
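
The reported 0.933 AUC can be understood through the rank formulation of AUC: the probability that a randomly chosen defective module receives a higher score than a randomly chosen clean one. A self-contained sketch with hypothetical labels and scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank/Wilcoxon formulation:
    the probability a random positive scores above a random negative,
    counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical defect labels and model scores for six modules
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(auc(labels, scores))  # 8 of 9 pairs ranked correctly
```

An AUC of 0.5 means the model ranks no better than chance, while 1.0 means every defective module outscores every clean one.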


Subjects
Deep Learning, Mobile Applications, Algorithms, Area Under Curve, Neural Networks (Computer)
12.
Knowl Inf Syst ; 64(6): 1457-1500, 2022.
Article in English | MEDLINE | ID: mdl-35645443

ABSTRACT

Phishing attacks aim to steal confidential information using sophisticated methods, techniques, and tools such as phishing through content injection, social engineering, online social networks, and mobile applications. To avoid and mitigate the risks of these attacks, several phishing detection approaches have been developed, among which deep learning algorithms provide promising results. However, the results and the corresponding lessons learned are fragmented across many different studies, and a systematic overview of the use of deep learning algorithms in phishing detection is lacking. Hence, we performed a systematic literature review (SLR) to identify, assess, and synthesize the results on deep learning approaches for phishing detection as reported in the selected scientific publications. We address nine research questions and provide an overview of how deep learning algorithms have been used for phishing detection from several aspects. In total, 43 journal articles were selected from electronic databases to derive the answers to the defined research questions. Our SLR study shows that, except for one study, all the proposed models applied supervised deep learning algorithms. The widely used data sources were URL-related data, third-party information on the website, website content-related data, and email. The most used deep learning algorithms were deep neural networks (DNN), convolutional neural networks, and recurrent neural networks/long short-term memory networks. DNN and hybrid deep learning algorithms provided the best performance among the deep learning-based algorithms. 72% of the studies did not apply any feature selection algorithm to build the prediction model. PhishTank was the most used dataset. While Keras and TensorFlow were the most preferred deep learning frameworks, 46% of the articles did not mention any framework. This study also highlights several challenges for phishing detection to pave the way for further research.

13.
Article in English | MEDLINE | ID: mdl-35565088

ABSTRACT

Stress has been designated the "Health Epidemic of the 21st Century" by the World Health Organization and negatively affects the quality of individuals' lives by adversely affecting most body systems. In today's world, different methods are used to track and measure various types of stress. Among these techniques, experience sampling is a unique method for studying everyday stress, which can affect employees' performance and even their health by threatening them emotionally and physically. The main advantage of experience sampling is that evaluating instantaneous experiences causes less memory bias than traditional retrospective measures; further, it allows the exploration of temporal relationships in subjective experiences. The objective of this paper is to structure, analyze, and characterize the state of the art of the available literature on the surveillance of work stress via the experience sampling method. We used the formal research methodology of systematic mapping to conduct a breadth-first review and found 358 papers published between 2010 and 2021, which we classified with respect to focus, research type, and contribution type. The resulting research landscape summarizes the opportunities and challenges of utilizing the experience sampling method for stress detection for practitioners and academics.


Subjects
Occupational Stress, Humans, Occupational Stress/epidemiology
14.
Sensors (Basel) ; 22(7)2022 Mar 26.
Article in English | MEDLINE | ID: mdl-35408166

ABSTRACT

Software defect prediction studies aim to predict defect-prone components before the testing stage of the software development process. The main benefit of these prediction models is that testing resources can be allocated to fault-prone modules effectively. While a few software defect prediction models have been developed for mobile applications, a systematic overview of these studies is still missing. Therefore, we carried out a Systematic Literature Review (SLR) to evaluate how machine learning has been applied to predict faults in mobile applications. The study defined nine research questions, and 47 relevant studies were selected from scientific databases to answer them. Results show that most studies focused on Android applications (48%), supervised machine learning was applied in most studies (92%), and object-oriented metrics were mainly preferred. The top five machine learning algorithms are Naïve Bayes, Support Vector Machines, Logistic Regression, Artificial Neural Networks, and Decision Trees. Only a few studies applied deep learning algorithms, including Long Short-Term Memory (LSTM), Deep Belief Networks (DBN), and Deep Neural Networks (DNN). This is the first study to systematically review software defect prediction research focused on mobile applications; it paves the way for further research in mobile software fault prediction and helps both researchers and practitioners in this field.


Subjects
Machine Learning, Mobile Applications, Algorithms, Bayes Theorem, Neural Networks (Computer), Support Vector Machine
15.
Sensors (Basel) ; 22(4)2022 Feb 09.
Article in English | MEDLINE | ID: mdl-35214212

ABSTRACT

In recent years, research into blockchain technology and the Internet of Things (IoT) has grown rapidly, aided by increased media coverage. Many blockchain applications and platforms have been developed for different purposes, such as food safety monitoring, cryptocurrency exchange, and secure medical data sharing. However, blockchain platforms cannot store all the generated data, so they are supported by data warehouses; the combination is called a hybrid blockchain platform. While several systems have been developed based on this idea, a state-of-the-art systematic overview of the use of hybrid blockchain platforms is lacking. Therefore, we carried out a systematic literature review (SLR) to investigate the motivations for adopting them, the domains in which they are used, the adopted technologies that make this integration effective, and, finally, the challenges and possible solutions. This study shows that security, transparency, and efficiency are the top three motivations for adopting these platforms; energy, agriculture, health, construction, manufacturing, and supply chain are the top domains; and the most adopted technologies are cloud computing, fog computing, telecommunications, and edge computing. While there are several benefits to using hybrid blockchains, there are also several challenges, which are reported in this study.
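
A minimal sketch of the hybrid idea: the chain stores only hashes and references, while bulky records live in an off-chain store, and integrity is checked by recomputing hashes. This is a toy illustration under assumed field names, not any specific platform's design.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's JSON representation."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# On-chain blocks keep only a hash of the off-chain payload, the split
# that defines a hybrid blockchain platform.
off_chain = {"doc-1": "large sensor payload ..."}
genesis = {"index": 0, "prev": "0" * 64,
           "data_ref": ["doc-1", block_hash({"doc": off_chain["doc-1"]})]}
nxt = {"index": 1, "prev": block_hash(genesis), "data_ref": None}

# Tamper-evidence check: the stored link must match the recomputed hash.
assert nxt["prev"] == block_hash(genesis)
print(nxt["prev"][:8])
```

Any change to the off-chain record or an earlier block changes the recomputed hash and breaks the link, which is what gives the hybrid design its integrity guarantee despite external storage.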


Subjects
Blockchain, Internet of Things, Cloud Computing, Delivery of Health Care, Information Dissemination
16.
Sensors (Basel) ; 21(21)2021 Oct 27.
Article in English | MEDLINE | ID: mdl-34770422

ABSTRACT

Providing a stable, low-priced, and safe supply of energy to end users is a challenging task. Energy service providers are affected by events such as weather, volatility, and special events, so predicting these events and having a time window for taking preventive measures are crucial for service providers. Electrical load forecasting can be modeled as a time series prediction problem; one solution is to capture the spatial correlations, spatial-temporal relations, and time dependency of such temporal networks in the time series. Different machine learning methods have previously been used for time series prediction tasks; however, there is still a need for new research to improve the performance of short-term load forecasting (STLF) models. In this article, we propose a novel deep learning model to predict electric load consumption using dual-stage attention-based recurrent neural networks, in which the attention mechanism is used in both the encoder and decoder stages. The encoder attention layer identifies important features from the input vector, whereas the decoder attention layer overcomes the limitations of a fixed context vector and provides a much longer memory capacity. The proposed model improves STLF performance in terms of the Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) scores. To evaluate the predictive performance of the proposed model, the UCI household electric power consumption (HEPC) dataset was used in the experiments. Experimental results demonstrate that the proposed approach outperforms previously adopted techniques.
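
The paper uses dual-stage attention in both encoder and decoder; as a single-stage, illustrative reduction (not the paper's architecture), the sketch below computes dot-product attention weights over a set of encoder hidden states and the resulting context vector.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(states, query):
    """Dot-product attention: weight encoder states by relevance to the
    query and return the weighted sum as the context vector."""
    weights = softmax(states @ query)
    return weights, weights @ states

rng = np.random.default_rng(1)
states = rng.normal(size=(5, 8))  # 5 encoder time steps, 8 hidden units
query = rng.normal(size=8)        # e.g. the decoder's current hidden state
w, context = attend(states, query)
print(w.sum(), context.shape)     # weights sum to 1, context is (8,)
```

Because the context vector is recomputed at every decoding step, the model is not limited to a single fixed summary of the input sequence, which is the limitation the decoder attention addresses.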


Subjects
Machine Learning, Neural Networks (Computer), Forecasting, Time, Weather
17.
BMC Med Inform Decis Mak ; 21(1): 210, 2021 07 08.
Article in English | MEDLINE | ID: mdl-34238281

ABSTRACT

BACKGROUND: Healthcare relies on health information systems (HISs) to support care and to receive reimbursement for the care provided. Healthcare providers experience many problems with their HISs due to improper architecture design. To support the design of a proper HIS architecture, a reference architecture (RA) can be used that meets the various stakeholder concerns of HISs. The objective of this study is therefore to develop and analyze an RA following well-established architecture design methods. METHODS: Domain analysis was performed to scope and model the domain of HISs. For the architecture design, we applied the Views and Beyond approach and designed the RA's views based on the stakeholders and features from the domain analysis. We evaluated the RA with a case study. RESULTS: We derived four architecture views for HISs: the context diagram, decomposition view, layered view, and deployment view. Each view shows the architecture of the HIS from a different angle, suitable for various stakeholders. Based on a Japanese hospital information system study, we applied the RA and derived the application architecture. CONCLUSION: We demonstrated that the methods of the software architecture design community can be used effectively in the healthcare domain and showed the applicability of the RA.


Subjects
Health Information Systems, Hospital Information Systems, Delivery of Health Care, Humans
18.
Comput Biol Med ; 133: 104365, 2021 06.
Article in English | MEDLINE | ID: mdl-33866251

ABSTRACT

Precision Nutrition research aims to use personal information about individuals or groups of individuals to deliver nutritional advice that, theoretically, would be more suitable than generic advice. Machine learning, a sub-branch of Artificial Intelligence, promises to aid in the development of predictive models suitable for Precision Nutrition, and recent research has applied machine learning algorithms, tools, and techniques in Precision Nutrition for different purposes. However, a systematic overview of the state of the art on the use of machine learning in Precision Nutrition is lacking. Therefore, we carried out a Systematic Literature Review (SLR) to provide an overview of where and how machine learning has been used in Precision Nutrition, what such machine learning models use as input features, what the availability status of the data used in the literature is, and how the models are evaluated. Nine research questions were defined in this study. We retrieved 4930 papers from electronic databases, and 60 primary studies were selected to answer the research questions; all of the selected primary studies are briefly discussed in this article. Our results show fifteen problems spread across seven domains of nutrition and health. Four machine learning tasks appear, in the form of regression, classification, recommendation, and clustering, with most of these utilizing a supervised approach. In total, 30 algorithms were used, 19 of them appearing more than once, and models were evaluated using four groups of approaches and 23 evaluation metrics. Personalized approaches are promising for reducing the burden of current problems in nutrition research, and this review shows that machine learning can be incorporated into Precision Nutrition research with high performance. Precision Nutrition researchers should consider incorporating machine learning into their methods to facilitate the integration of many complex features, allowing the development of high-performance Precision Nutrition approaches.


Subjects
Artificial Intelligence, Machine Learning, Algorithms, Databases, Factual, Humans, Nutritional Status
19.
Sensors (Basel) ; 21(3)2021 Jan 30.
Article in English | MEDLINE | ID: mdl-33573297

ABSTRACT

Predictive maintenance of production lines is important for the early detection of possible defects, so that the required maintenance activities can be identified and applied to avoid breakdowns. An important concern in predictive maintenance is the prediction of remaining useful life (RUL), an estimate of how long a component in a production line can continue to function in accordance with its intended purpose before warranting replacement. In this study, we propose a novel machine learning-based approach for automating the prediction of equipment failure in continuous production lines. The proposed model applies normalization and principal component analysis during the pre-processing stage, utilizes interpolation, uses grid search for parameter optimization, and is built with the multilayer perceptron (MLP) neural network algorithm. We evaluated the approach in a case study predicting the RUL of engines on the NASA turbo engine datasets. Experimental results demonstrate that the proposed model is effective in predicting the RUL of turbo engines and substantially enhances predictive maintenance results.
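
The pre-processing stages (normalization followed by principal component analysis) can be sketched in plain numpy as below. The channel count and number of components are illustrative assumptions, and the grid-searched MLP that follows in the paper's pipeline is omitted.

```python
import numpy as np

def normalize(X):
    """Z-score normalization per sensor channel."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def pca(X, k):
    """Project onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 21))  # e.g. 21 sensor channels per engine cycle
Z = pca(normalize(X), k=5)
print(Z.shape)                  # (100, 5) features fed to the MLP
```

Reducing correlated sensor channels to a few components both denoises the input and shrinks the search space the grid-searched MLP has to cover.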

20.
Prev Vet Med ; 187: 105237, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33418514

ABSTRACT

In recent years, several researchers and practitioners have applied machine learning algorithms in the dairy farm context and discussed solutions for predicting various variables of interest, most of which relate to incipient diseases. The objective of this article is to identify, assess, and synthesize the papers that discuss the application of machine learning in the dairy farm management context. Using a systematic literature review (SLR) protocol, we retrieved 427 papers, of which 38 were determined to be primary studies and were analysed in detail. More than half of the papers (55%) addressed disease detection; the other two categories of problems addressed were milk production and milk quality. Seventy-one independent variables were identified and grouped into seven categories, of which the two prominent categories, used in more than half of the papers, were milking parameters and milk properties; the others were milk content, pregnancy/calving information, cow characteristics, lactation, and farm characteristics. Twenty-three algorithms were identified and grouped into four categories: decision tree-based algorithms are by far the most used, followed by artificial neural network-based algorithms, while regression-based and other algorithms were used in 13 papers. Twenty-three evaluation parameters were identified, of which 7 were used 3 or more times; the three used by more than half of the papers are sensitivity, specificity, and RMSE. The challenges most encountered were feature selection and unbalanced data, which together with problem size, overfitting/estimating, and parameter tuning account for three-quarters of the challenges identified. To the best of our knowledge, this is the first SLR study on the use of machine learning to improve dairy farm management; it will be valuable not only for researchers but also for practitioners in dairy farms.
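
The two most reported metrics, sensitivity and specificity, can be sketched directly; the labels and predictions below are hypothetical.

```python
def sensitivity_specificity(labels, preds):
    """Sensitivity = true positive rate; specificity = true negative rate."""
    tp = sum(y == 1 and p == 1 for y, p in zip(labels, preds))
    tn = sum(y == 0 and p == 0 for y, p in zip(labels, preds))
    fn = sum(y == 1 and p == 0 for y, p in zip(labels, preds))
    fp = sum(y == 0 and p == 1 for y, p in zip(labels, preds))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical disease-detection outcomes for eight cows
labels = [1, 1, 1, 1, 0, 0, 0, 0]
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
print(sensitivity_specificity(labels, preds))  # (0.75, 0.75)
```

In a disease-detection setting, sensitivity measures how many sick animals are caught, while specificity measures how many healthy animals avoid a false alarm, which is why the two are usually reported together.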


Subjects
Dairying/methods, Machine Learning/statistics & numerical data, Animals, Cattle, Female